hysop.backend.host.fortran.operator.diffusion module

class hysop.backend.host.fortran.operator.diffusion.DiffusionFFTW(Fin, Fout, nu, variables, dt, **kargs)[source]

Bases: FortranFFTWOperator

Diffusion operator based on FFTW (Fortran backend).

Parameters:
  • Fin (Field) – The input field to be diffused.

  • Fout (Field) – The output field that receives the diffused result.

  • variables (dictionary of fields:topology) – The chosen discretizations.

  • nu (float or ScalarParameter) – The diffusion coefficient nu.

  • dt (ScalarParameter) – Timestep parameter that will be used for time integration.

  • kargs – Base class parameters.

Notes

Equations:

    dF/dt = nu*Laplacian(F)
    in:  F = Fin
    out: F = Fout

Implicit resolution in Fourier space:

    F_hat(tn+1) = 1/(1 - nu*dt*sum(Ki**2)) * F_hat(tn)
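The implicit update above can be sketched with a plain NumPy standalone example (this is only an illustration of the scheme, not HySoP's FFTW-based implementation). With Ki = i*ki the denominator 1 - nu*dt*sum(Ki**2) equals 1 + nu*dt*|k|**2:

```python
import numpy as np

def implicit_diffusion_step(F, nu, dt, dx):
    """One backward-Euler diffusion step in Fourier space on a periodic 2D grid.

    Each Fourier mode is damped by 1/(1 + nu*dt*|k|^2), which is the
    documented factor 1/(1 - nu*dt*sum(Ki**2)) with Ki = i*ki.
    """
    # Angular wavenumbers along one axis (grid assumed square and uniform).
    k = 2.0 * np.pi * np.fft.fftfreq(F.shape[0], d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2

    F_hat = np.fft.fft2(F)
    F_hat /= 1.0 + nu * dt * k2          # implicit (unconditionally stable) update
    return np.real(np.fft.ifft2(F_hat))
```

For a single Fourier mode sin(x) on a 2*pi-periodic grid (|k| = 1), one step damps the field by exactly 1/(1 + nu*dt).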

apply(**kwds)

Abstract method that should be implemented. Applies this node (operator, computational graph operator…).

discretize()[source]

By default, an operator discretizes all of its variables. For each input continuous field that is also an output field, the input topology may differ from the output topology.

After this call, one can access self.input_discrete_fields and self.output_discrete_fields, which contain the input and output discretized fields mapped by continuous fields.

self.discrete_fields will be a tuple containing all input and output discrete fields.

Discrete tensor fields are built back from discretized scalar fields and are accessible from self.input_tensor_fields, self.output_tensor_fields and self.discrete_tensor_fields, like their scalar counterparts.

initialize(**kwds)[source]

Initialize this node.

Initialization step sets the following variables:

  • self.method
  • self.input_field_requirements
  • self.output_field_requirements
  • self.initialized

It returns self.method.

Order of execution is:

  1. self.pre_initialize()
  2. self._setup_method()
  3. self.handle_method()
  4. self.get_field_requirements()
  5. self._initialized = True
  6. self.post_initialize()

See ComputationalGraphNode.handle_method() to see how user method is handled. See ComputationalGraphNode.get_field_requirements() to see how topology requirements are handled.

After this method has been handled by all operators, initialization collects the min and max ghosts required by each operator, which will be useful in the discretization step to automatically build topologies or check against user-supplied topologies.

This function also sets the self.initialized flag to True (just before post initialization). Once this flag is set one may call ComputationalGraphNode.discretize().
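The documented execution order can be illustrated with a minimal mock that records the hook sequence (an illustrative sketch only, not HySoP's actual ComputationalGraphNode implementation):

```python
class MockNode:
    """Records the initialization hook order documented for initialize()."""

    def __init__(self):
        self.calls = []
        self._initialized = False

    def pre_initialize(self):
        self.calls.append("pre_initialize")

    def _setup_method(self):
        self.calls.append("_setup_method")

    def handle_method(self):
        self.calls.append("handle_method")

    def get_field_requirements(self):
        self.calls.append("get_field_requirements")

    def post_initialize(self):
        self.calls.append("post_initialize")

    def initialize(self):
        # Order of execution as documented above.
        self.pre_initialize()
        self._setup_method()
        self.handle_method()
        self.get_field_requirements()
        self._initialized = True   # flag set just before post-initialization
        self.post_initialize()
```

Subclasses overriding any of these hooks keep the same overall sequence, since only initialize() drives the ordering.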